Tags: transformer* + deep learning*

6 bookmark(s)

  1. The article explores the architectural changes that let DeepSeek's models perform well with fewer resources, focusing on Multi-Head Latent Attention (MLA). It traces the evolution of attention mechanisms, from Bahdanau's additive attention to the Transformer's Multi-Head Attention (MHA), and introduces Grouped-Query Attention (GQA) as a remedy for the memory cost of MHA's per-head key/value cache. The article highlights DeepSeek's competitive performance despite lower reported training costs.
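
    For intuition, here is a minimal NumPy sketch of grouped-query attention; it is an illustration, not DeepSeek's implementation, and the shapes and function names are assumptions. The key idea: several query heads share one key/value head, shrinking the K/V cache that full MHA keeps per query head.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def grouped_query_attention(q, k, v):
    # q: (seq, n_q_heads, d); k, v: (seq, n_kv_heads, d) with fewer K/V heads
    n_q_heads, n_kv_heads = q.shape[1], k.shape[1]
    group = n_q_heads // n_kv_heads      # query heads per shared K/V head
    d = q.shape[-1]
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group                  # map each query head to its shared K/V head
        scores = q[:, h] @ k[:, kv].T / np.sqrt(d)
        out[:, h] = softmax(scores) @ v[:, kv]
    return out

# Toy shapes: 8 query heads share 2 K/V heads, giving a 4x smaller K/V cache.
rng = np.random.default_rng(0)
q = rng.normal(size=(5, 8, 16))
k = rng.normal(size=(5, 2, 16))
v = rng.normal(size=(5, 2, 16))
print(grouped_query_attention(q, k, v).shape)  # (5, 8, 16)
```

    Setting the number of K/V heads equal to the number of query heads recovers standard MHA, and a single K/V head gives multi-query attention; MLA goes further by compressing keys and values into a low-rank latent.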

  2. A detailed explanation of the Transformer model, a key architecture in modern deep learning for tasks like neural machine translation, focusing on components like self-attention, encoder and decoder stacks, positional encoding, and training.
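
    As a concrete illustration of one of those components, here is a small NumPy sketch of the sinusoidal positional encoding from the Transformer paper (the function name and the assumption of an even d_model are choices of this sketch):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Interleaved sine/cosine encodings; assumes d_model is even.
    pos = np.arange(seq_len)[:, None]        # token position
    i = np.arange(d_model // 2)[None, :]     # index of each sin/cos pair
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)             # even dimensions get sine
    pe[:, 1::2] = np.cos(angles)             # odd dimensions get cosine
    return pe

print(positional_encoding(seq_len=4, d_model=8).shape)  # (4, 8)
```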

  3. This article is part of a series titled ‘LLMs from Scratch’, a complete guide to understanding and building Large Language Models (LLMs). In this article, we discuss the self-attention mechanism and how transformers use it to create rich, context-aware embeddings.

    The Self-Attention mechanism is used to add context to learned embeddings, which are vectors representing each word in the input sequence. The process involves the following steps:

    1. Learned Embeddings: These are the initial vector representations of words, learned during the training phase. The weight matrix that stores them forms the embedding layer, the first layer of the Transformer architecture.

    2. Positional Encoding: This step adds positional information to the learned embeddings. Transformers process all words in parallel, so without explicit positional information the model would have no notion of word order.

    3. Self-Attention: The core of the mechanism is to update each learned embedding with context from the surrounding words in the input sequence. Self-attention determines which words provide context for which others, and that contextual information produces the final contextualized embeddings (a minimal sketch follows this list).
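
    The sketch below walks through the three steps with NumPy; the random embedding and projection matrices are hypothetical stand-ins for weights a real model learns during training:

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8

# Steps 1 and 2: stand-in for learned embeddings with positional
# encoding already added (random values purely for illustration).
x = rng.normal(size=(seq_len, d_model))

# Step 3: self-attention. The query/key/value projections are
# hypothetical randomly initialised matrices.
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
q, k, v = x @ w_q, x @ w_k, x @ w_v

scores = q @ k.T / np.sqrt(d_model)     # how relevant each word is to each other word
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
contextual = weights @ v                # one contextualized embedding per word
print(contextual.shape)                 # (4, 8)
```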

  4. Combined with the growing trend of multimodality, or models that combine language, image, and other capabilities, we may see AI models operating more like a committee of different components rather than a monolithic block. This approach has many conceptual similarities to ideas described by Marvin Minsky and Seymour Papert in the early days of AI.

  5. 2021-08-13 by klotz
  6. 2021-05-09 by klotz
